
    Automatic near real-time selection of flood water levels from high resolution Synthetic Aperture Radar images for assimilation into hydraulic models: a case study

    Flood extents caused by fluvial floods in urban and rural areas may be predicted by hydraulic models. Assimilation may be used to correct the model state and improve the estimates of the model parameters or external forcing. One common observation assimilated is the water level at various points along the modelled reach. Distributed water levels may be estimated indirectly along the flood extents in Synthetic Aperture Radar (SAR) images by intersecting the extents with the floodplain topography. It is necessary to select a subset of levels for assimilation because adjacent levels along the flood extent will be strongly correlated. A method for selecting such a subset automatically and in near real-time is described, which would allow the SAR water levels to be used in a forecasting model. The method first selects candidate waterline points in flooded rural areas having low slope. The waterline levels and positions are corrected for the effects of double reflections between the water surface and emergent vegetation at the flood edge. Waterline points are also selected in flooded urban areas away from radar shadow and layover caused by buildings, with levels similar to those in adjacent rural areas. The resulting points are thinned to reduce spatial autocorrelation using a top-down clustering approach. The method was developed using a TerraSAR-X image from a particular case study involving urban and rural flooding. The waterline points extracted proved to be spatially uncorrelated, with levels reasonably similar to those determined manually from aerial photographs, and in good agreement with those of nearby gauges.
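    The thinning step lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation: hypothetical candidate waterline points (x, y, water level) are recursively split along their longest spatial axis until each cluster falls below a chosen length scale, and a single representative level is kept per cluster to reduce spatial autocorrelation.

```python
import numpy as np

def thin_waterline_points(points, max_extent=250.0):
    """Top-down thinning of candidate waterline points (hypothetical sketch).

    points     : (N, 3) array of [x, y, water_level] in map units / metres.
    max_extent : clusters wider than this are split further.
    Returns one representative [x, y, median level] per final cluster.
    """
    xy = points[:, :2]
    span = xy.max(axis=0) - xy.min(axis=0)
    if len(points) == 1 or span.max() <= max_extent:
        # Cluster is small enough: keep a single representative point.
        return [np.append(xy.mean(axis=0), np.median(points[:, 2]))]
    # Otherwise split along the longest axis at its mean and recurse.
    axis = int(np.argmax(span))
    mask = xy[:, axis] <= xy[:, axis].mean()
    return (thin_waterline_points(points[mask], max_extent)
            + thin_waterline_points(points[~mask], max_extent))

# Example: 500 densely spaced candidate points along a 2 km reach.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2000.0, 500))
pts = np.column_stack([x, rng.uniform(0.0, 50.0, 500), 10.0 + 0.001 * x])
thinned = thin_waterline_points(pts)
print(f"{len(pts)} candidates reduced to {len(thinned)} representative levels")
```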

    The effects of spatial resolution and dimensionality on modeling regional-scale hydraulics in a multichannel river

    As modeling capabilities at regional and global scales improve, questions remain regarding the appropriate process representation required to accurately simulate multichannel river hydraulics. This study uses the hydrodynamic model LISFLOOD-FP to simulate patterns of water surface elevation (WSE), depth, and inundation extent across a ∼90 km anabranching reach of the Tanana River, Alaska. To provide boundary conditions, we collected field observations of bathymetry and WSE during a 2-week field campaign in summer 2013. For the first time at this scale, we test a simple, raster-based model's capabilities to simulate 2-D, in-channel patterns of WSE and inundation extent. Additionally, we compare finer resolution (≤25 m) 2-D models to four other models of lower dimensionality and coarser resolution (100–500 m) to determine the effects of simplifying process representation. Results indicate that simple, raster-based models can accurately simulate 2-D, in-channel hydraulics in the Tanana. Also, the fine-resolution, 2-D models produce lower errors in spatiotemporal outputs of WSE and inundation extent compared to coarse-resolution, 1-D models: 22.6 cm versus 56.4 cm RMSE for WSE, and 90% versus 41% Critical Success Index values for simulating inundation extent. Incorporating the anabranching channel network using subgrid representations for smaller channels is important for simulating accurate hydraulics and lowers RMSE in spatially distributed WSE by at least 16%. As a result, better representation of the converging and diverging multichannel network by using subgrid solvers or downscaling techniques in multichannel rivers is needed to reduce errors in regional- to global-scale models.
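    The two skill scores quoted above are straightforward to reproduce. Below is a minimal sketch with made-up numbers (not data from the Tanana study) of the RMSE in water surface elevation and the Critical Success Index, defined as hits / (hits + misses + false alarms), for a binary inundation map.

```python
import numpy as np

def wse_rmse(modelled, observed):
    """Root-mean-square error between modelled and observed WSE (same units)."""
    m, o = np.asarray(modelled, dtype=float), np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((m - o) ** 2)))

def critical_success_index(modelled_wet, observed_wet):
    """CSI = hits / (hits + misses + false alarms) for binary wet/dry maps."""
    m = np.asarray(modelled_wet, dtype=bool)
    o = np.asarray(observed_wet, dtype=bool)
    hits = np.sum(m & o)
    misses = np.sum(~m & o)
    false_alarms = np.sum(m & ~o)
    return float(hits / (hits + misses + false_alarms))

# Hypothetical values, purely to show the calculation:
print(wse_rmse([121.10, 121.35, 120.90], [121.00, 121.60, 120.85]))   # metres
print(critical_success_index([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))       # 0.5
```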

    Fuzzy cluster validation using the partition negentropy criterion

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-04277-5_24. Proceedings of the 19th International Conference, Limassol, Cyprus, September 14-17, 2009. We introduce the Partition Negentropy Criterion (PNC) for cluster validation. It is a cluster validity index that rewards the average normality of the clusters, measured by means of the negentropy, and penalizes the overlap, measured by the partition entropy. The PNC is aimed at finding well separated clusters whose shape is approximately Gaussian. We use the new index to validate fuzzy partitions in a set of synthetic clustering problems, and compare the results to those obtained by the AIC, BIC and ICL criteria. The partitions are obtained by fitting a Gaussian Mixture Model to the data using the EM algorithm. We show that, when the real clusters are normally distributed, all the criteria are able to correctly assess the number of components, with AIC and BIC allowing a higher cluster overlap. However, when the real cluster distributions are not Gaussian (i.e. the distribution assumed by the mixture model), the PNC outperforms the other indices, being able to correctly evaluate the number of clusters while the other criteria (especially AIC and BIC) tend to overestimate it. This work has been partially supported with funds from MEC BFU2006-07902/BFI, CAM S-SEM-0255-2006 and CAM/UAM project CCG08-UAM/TIC-442.
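    As a rough illustration of the ingredients involved (not the PNC itself, whose negentropy reward also requires a per-cluster normality estimate), the sketch below fits Gaussian Mixture Models by EM with scikit-learn on synthetic data, computes the partition entropy penalty from the responsibilities, and reports AIC and BIC for comparison.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_entropy(responsibilities):
    """Average entropy of the fuzzy partition; large when clusters overlap."""
    r = np.clip(responsibilities, 1e-12, 1.0)
    return float(-np.mean(np.sum(r * np.log(r), axis=1)))

# Two well-separated synthetic Gaussian clusters (hypothetical data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(6.0, 1.0, size=(200, 2))])

for k in (1, 2, 3, 4):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
    pe = partition_entropy(gmm.predict_proba(X))
    print(f"k={k}: partition entropy={pe:.3f}  AIC={gmm.aic(X):.1f}  BIC={gmm.bic(X):.1f}")
```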

    Professionalism, Golf Coaching and a Master of Science Degree: A commentary

    As a point of reference, I congratulate Simon Jenkins on tackling the issue of professionalism in coaching. As he points out, coaching is not a profession, but this does not mean that coaching would not benefit from going through a professionalization process. As things stand, I find that the stimulus article unpacks some critically important issues of professionalism, broadly within the context of golf coaching. However, I am not sure enough is made of understanding what professional (golf) coaching actually is, nor how the development of a professional golf coach can be facilitated by a Master of Science Degree (M.Sc.). I will focus my commentary on these two issues.

    Piecewise Approximate Bayesian Computation: fast inference for discretely observed Markov models using a factorised posterior distribution

    Many modern statistical applications involve inference for complicated stochastic models for which the likelihood function is difficult or even impossible to calculate, and hence conventional likelihood-based inferential techniques cannot be used. In such settings, Bayesian inference can be performed using Approximate Bayesian Computation (ABC). However, in spite of many recent developments to ABC methodology, in many applications the computational cost of ABC necessitates the choice of summary statistics and tolerances that can potentially severely bias the estimate of the posterior. We propose a new “piecewise” ABC approach suitable for discretely observed Markov models that involves writing the posterior density of the parameters as a product of factors, each a function of only a subset of the data, and then using ABC within each factor. The approach has the advantage of side-stepping the need to choose a summary statistic, and it enables a stringent tolerance to be set, making the posterior “less approximate”. We investigate two methods for estimating the posterior density based on ABC samples for each of the factors: the first is to use a Gaussian approximation for each factor, and the second is to use a kernel density estimate. Both methods have their merits. The Gaussian approximation is simple, fast, and probably adequate for many applications. On the other hand, using instead a kernel density estimate has the benefit of consistently estimating the true piecewise ABC posterior as the number of ABC samples tends to infinity. We illustrate the piecewise ABC approach with four examples; in each case, the approach offers fast and accurate inference.
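    To make the factorised recipe concrete, here is a minimal sketch on a toy random-walk-with-drift Markov model (invented for illustration; not one of the paper's four examples): rejection ABC is run separately on each observed transition, each factor's accepted draws are summarised by a Gaussian, and the factors are combined by multiplying the Gaussians, which is adequate here because the prior is flat over the sampled range.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discretely observed Markov model: y_t = y_{t-1} + theta + N(0, 1).
theta_true = 0.7
y = np.concatenate([[0.0], np.cumsum(theta_true + rng.normal(0.0, 1.0, 30))])

def abc_factor(y_prev, y_next, n_sim=20000, tol=0.1):
    """Rejection ABC for one transition factor p(y_next | y_prev, theta)."""
    theta = rng.uniform(-5.0, 5.0, n_sim)            # flat prior on [-5, 5]
    y_sim = y_prev + theta + rng.normal(0.0, 1.0, n_sim)
    accepted = theta[np.abs(y_sim - y_next) < tol]   # no summary statistic needed
    return accepted.mean(), accepted.var(ddof=1)     # Gaussian summary of factor

# Combine the factors: precisions add, means are precision-weighted.
means, variances = zip(*(abc_factor(y[i - 1], y[i]) for i in range(1, len(y))))
means, variances = np.array(means), np.array(variances)
posterior_var = 1.0 / np.sum(1.0 / variances)
posterior_mean = posterior_var * np.sum(means / variances)
print(f"theta ~ {posterior_mean:.3f} +/- {np.sqrt(posterior_var):.3f} (true {theta_true})")
```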

    Measurement of the partial widths of the Z into up- and down-type quarks

    Using the entire OPAL LEP1 on-peak Z hadronic decay sample, Z -> qbarq gamma decays were selected by tagging hadronic final states with isolated photon candidates in the electromagnetic calorimeter. Combining the measured rates of Z -> qbarq gamma decays with the total rate of hadronic Z decays permits the simultaneous determination of the widths of the Z into up- and down-type quarks. The values obtained, with total errors, were Gamma_u = 300 +19/-18 MeV and Gamma_d = 381 +12/-12 MeV. The results are in good agreement with the Standard Model expectation. Comment: 22 pages, 5 figures, Submitted to Phys. Lett.

    Search for R-Parity Violating Decays of Scalar Fermions at LEP

    A search for pair-produced scalar fermions under the assumption that R-parity is not conserved has been performed using data collected with the OPAL detector at LEP. The data samples analysed correspond to an integrated luminosity of about 610 pb-1 collected at centre-of-mass energies of sqrt(s) = 189-209 GeV. An important consequence of R-parity violation is that the lightest supersymmetric particle is expected to be unstable. Searches for R-parity violating decays of charged sleptons, sneutrinos and squarks have been performed under the assumptions that the lightest supersymmetric particle decays promptly and that only one of the R-parity violating couplings is dominant for each of the decay modes considered. Such processes would yield final states consisting of leptons, jets, or both, with or without missing energy. No significant signal-like excess of events has been observed with respect to the Standard Model expectations. Limits on the production cross-section of scalar fermions in R-parity violating scenarios are obtained. Constraints on the supersymmetric particle masses are also presented in an R-parity violating framework analogous to the Constrained Minimal Supersymmetric Standard Model. Comment: 51 pages, 24 figures, Submitted to Eur. Phys. J.

    Measurement of the Strong Coupling alpha s from Four-Jet Observables in e+e- Annihilation

    Data from e+e- annihilation into hadrons at centre-of-mass energies between 91 GeV and 209 GeV, collected with the OPAL detector at LEP, are used to study the four-jet rate as a function of the Durham algorithm resolution parameter ycut. The four-jet rate is compared to next-to-leading order calculations that include the resummation of large logarithms. The strong coupling measured from the four-jet rate is alpha_s(M_Z0) = 0.1182 +- 0.0003 (stat.) +- 0.0015 (exp.) +- 0.0011 (had.) +- 0.0012 (scale) +- 0.0013 (mass), in agreement with the world average. Next-to-leading order fits to the D-parameter and thrust minor event-shape observables are also performed for the first time. We find consistent results, but with significantly larger theoretical uncertainties. Comment: 25 pages, 15 figures, Submitted to Eur. Phys. J.
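    As a quick check on the quoted error budget, and assuming the five uncertainty components are independent and therefore combine in quadrature (an assumption; the abstract does not state the combination rule), the total uncertainty comes out at roughly 0.0026:

```python
import math

# Quoted uncertainty components on alpha_s(M_Z0) from the four-jet rate fit.
components = {"stat": 0.0003, "exp": 0.0015, "had": 0.0011,
              "scale": 0.0012, "mass": 0.0013}

# Quadrature sum, assuming the components are independent.
total = math.sqrt(sum(v ** 2 for v in components.values()))
print(f"alpha_s(M_Z0) = 0.1182 +/- {total:.4f}")   # approximately +/- 0.0026
```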

    Measurement of the Hadronic Photon Structure Function F_2^gamma at LEP2

    The hadronic structure function of the photon F_2^gamma is measured as a function of Bjorken x and of the factorisation scale Q^2 using data taken by the OPAL detector at LEP. Previous OPAL measurements of the x dependence of F_2^gamma are extended to an average Q^2 of 767 GeV^2. The Q^2 evolution of F_2^gamma is studied for average Q^2 between 11.9 and 1051 GeV^2. As predicted by QCD, the data show positive scaling violations in F_2^gamma. Several parameterisations of F_2^gamma are in agreement with the measurements, whereas the quark-parton model prediction fails to describe the data. Comment: 4 pages, 2 figures, to appear in the proceedings of Photon 2001, Ascona, Switzerland.

    A measurement of the tau mass and the first CPT test with tau leptons

    We measure the mass of the tau lepton to be 1775.1 +- 1.6 (stat.) +- 1.0 (syst.) MeV using tau pairs from Z0 decays. To test CPT invariance we compare the masses of the positively and negatively charged tau leptons. The relative mass difference is found to be smaller than 3.0 x 10^-3 at the 90% confidence level. Comment: 10 pages, 4 figures, Submitted to Phys. Lett.